

Leading US Research Lab Appears to Be Squeezing Out Foreign Scientists

WIRED

House Democrats are demanding answers from the National Institute of Standards and Technology and urging it to halt rumored changes they say could undermine its mission. One of the US government's top scientific research labs is taking steps that could drive away foreign scientists, a shift lawmakers and sources tell WIRED could cost the country valuable expertise and damage the agency's credibility. The National Institute of Standards and Technology (NIST) helps determine the frameworks underpinning everything from cybersecurity to semiconductor manufacturing. Some of NIST's recent work includes establishing guidelines for securing AI systems and identifying health concerns with air purifiers and firefighting gloves. Many of the agency's thousands of employees, postdoctoral scientists, contractors, and guest researchers are brought in from around the world for their specialized expertise.


Supplemental Information for "Diverse Community Data for Benchmarking Data Privacy Algorithms" (October 27, 2023)

Neural Information Processing Systems

The SDNist resources are intended as tools to encourage investigation and discussion of deidentification algorithms; they are not intended or suitable for product evaluation. The National Institute of Standards and Technology does not endorse any algorithm included in these resources.


NAD Supplement 101: Possible Benefits and Precautions Explained (2026)

WIRED

What is NAD+? Here's how it works in your body, why it matters, and whether supplementation is worth the hype. It's more than likely that the NAD+ supplement craze has already crossed your path. The Biebers have infused it. Joe Rogan has podcasted about it. Gwyneth Paltrow swears by it and, of course, sells her own Youth-Boost NAD+ Peptide Rich Cream. NAD+ (short for nicotinamide adenine dinucleotide) is a coenzyme that your body makes naturally; it contributes to energy production and immune function, among other things. The craze reflects a broader shift in how people think about healthy aging and extending their healthspan overall.


QGShap: Quantum Acceleration for Faithful GNN Explanations

Jena, Haribandhu, Shivottam, Jyotirmaya, Mishra, Subhankar

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have become indispensable in critical domains such as drug discovery, social network analysis, and recommendation systems, yet their black-box nature hinders deployment in scenarios requiring transparency and accountability. While Shapley value-based methods offer mathematically principled explanations by quantifying each component's contribution to predictions, computing exact values requires evaluating $2^n$ coalitions (or aggregating over $n!$ permutations), which is intractable for real-world graphs. Existing approximation strategies sacrifice either fidelity or efficiency, limiting their practical utility. We introduce QGShap, a quantum computing approach that leverages amplitude amplification to achieve quadratic speedups in coalition evaluation while maintaining exact Shapley computation. Unlike classical sampling or surrogate methods, our approach provides fully faithful explanations without approximation trade-offs for tractable graph sizes. We conduct empirical evaluations on synthetic graph datasets, demonstrating that QGShap achieves consistently high fidelity and explanation accuracy, matching or exceeding the performance of classical methods across all evaluation metrics. These results collectively demonstrate that QGShap not only preserves exact Shapley faithfulness but also delivers interpretable, stable, and structurally consistent explanations that align with the underlying graph reasoning of GNNs. The implementation of QGShap is available at https://github.com/smlab-niser/qgshap.
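To make the $2^n$-coalition cost concrete, here is a minimal classical sketch of exact Shapley computation by brute-force enumeration, the baseline that QGShap's amplitude amplification would accelerate quadratically. The function names and the additive toy game are illustrative, not part of the paper:

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, value_fn):
    """Exact Shapley values by enumerating every coalition.

    players:  list of hashable player ids (e.g. graph nodes or edges)
    value_fn: maps a frozenset coalition -> real-valued model payoff

    Cost is O(2^n) value_fn evaluations, which is why exact
    computation is intractable for real-world graphs.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            # Shapley weight for coalitions of size k not containing p
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # marginal contribution of p to this coalition
                phi[p] += weight * (value_fn(s | {p}) - value_fn(s))
    return phi
```

For an additive game (payoff is the sum of member values), each player's Shapley value collapses to its own value, which makes a handy sanity check for any approximation scheme.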


The Risk-Adjusted Intelligence Dividend: A Quantitative Framework for Measuring AI Return on Investment Integrating ISO 42001 and Regulatory Exposure

Huwyler, Hernan

arXiv.org Artificial Intelligence

Organizations investing in artificial intelligence face a fundamental challenge: traditional return on investment calculations fail to capture the dual nature of AI implementations, which simultaneously reduce certain operational risks while introducing novel exposures related to algorithmic malfunction, adversarial attacks, and regulatory liability. This research presents a comprehensive financial framework for quantifying AI project returns that explicitly integrates changes in organizational risk profiles. The methodology addresses a critical gap in current practice where investment decisions rely on optimistic benefit projections without accounting for the probabilistic costs of AI-specific threats including model drift, bias-related litigation, and compliance failures under emerging regulations such as the European Union Artificial Intelligence Act and ISO/IEC 42001. Drawing on established risk quantification methods, including annual loss expectancy calculations and Monte Carlo simulation techniques, this framework enables practitioners to compute net benefits that incorporate both productivity gains and the delta between pre-implementation and post-implementation risk exposures. The analysis demonstrates that accurate AI investment evaluation requires explicit modeling of control effectiveness, reserve requirements for algorithmic failures, and the ongoing operational costs of maintaining model performance. Practical implications include specific guidance for establishing governance structures, conducting phased validations, and integrating risk-adjusted metrics into capital allocation decisions, ultimately enabling evidence-based AI portfolio management that satisfies both fiduciary responsibilities and regulatory mandates.
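The core quantities the abstract names, annual loss expectancy and Monte Carlo simulation of risk-adjusted net benefit, can be sketched as follows. This is a simplified illustration under assumed normal productivity gains; the function names and parameters are hypothetical, not the paper's framework:

```python
import random

def annual_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """Classic ALE = SLE * ARO: expected yearly loss from one threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

def simulate_net_benefit(gain_mean, gain_sd, pre_ale, post_ale,
                         runs=10_000, seed=0):
    """Monte Carlo sketch of risk-adjusted net benefit: sampled
    productivity gain plus the delta between pre-implementation
    and post-implementation annual loss expectancies."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        gain = rng.gauss(gain_mean, gain_sd)      # assumed gain distribution
        total += gain + (pre_ale - post_ale)      # risk delta added each draw
    return total / runs
```

A real application would replace the single risk delta with per-threat ALEs (model drift, bias litigation, compliance failure) each with its own control effectiveness, but the accounting identity is the same.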



An Evaluation Framework for Network IDS/IPS Datasets: Leveraging MITRE ATT&CK and Industry Relevance Metrics

Tori, Adrita Rahman, Hasan, Khondokar Fida

arXiv.org Artificial Intelligence

The performance of Machine Learning (ML) and Deep Learning (DL)-based Intrusion Detection and Prevention Systems (IDS/IPS) is critically dependent on the relevance and quality of the datasets used for training and evaluation. However, current AI model evaluation practices for developing IDS/IPS focus predominantly on accuracy metrics, often overlooking whether datasets represent industry-specific threats. To address this gap, we introduce a novel multi-dimensional framework that integrates the MITRE ATT&CK knowledge base for threat intelligence and employs five complementary metrics that together provide a comprehensive assessment of dataset suitability. Methodologically, this framework combines threat intelligence, natural language processing, and quantitative analysis to assess the suitability of datasets for specific industry contexts. Applying this framework to nine publicly available IDS/IPS datasets reveals significant gaps in threat coverage, particularly in the healthcare, energy, and financial sectors. In particular, recent datasets (e.g., CIC-IoMT, CIC-UNSW-NB15) align better with sector-specific threats, whereas others, like CICIoV-24, underperform despite their recency. Our findings provide a standardized, interpretable approach for selecting datasets aligned with sector-specific operational requirements, ultimately enhancing the real-world effectiveness of AI-driven IDS/IPS deployments. The efficiency and practicality of the framework are validated through deployment in a real-world case study, underscoring its capacity to inform dataset selection and enhance the effectiveness of AI-driven IDS/IPS in operational environments.
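One plausible metric in such a framework is threat coverage: the fraction of sector-relevant MITRE ATT&CK technique IDs that appear among a dataset's labelled attack behaviours. The sketch below is an assumption about how such a metric could be computed, not the paper's definition:

```python
def technique_coverage(dataset_techniques, sector_techniques):
    """Fraction of sector-relevant ATT&CK technique IDs (e.g. 'T1110')
    that a dataset's attack labels cover. Returns a value in [0, 1]."""
    sector = set(sector_techniques)
    if not sector:
        return 0.0
    covered = set(dataset_techniques) & sector
    return len(covered) / len(sector)
```

A dataset labelling network scanning and brute force (T1046, T1110) would cover half of a hypothetical sector profile that also demands phishing (T1566) and data encrypted for impact (T1486).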



Automated Segmentation of Coronal Brain Tissue Slabs for 3D Neuropathology

Ramirez, Jonathan Williams, Zemlyanker, Dina, Deden-Binder, Lucas, Herisse, Rogeny, Pallares, Erendira Garcia, Gopinath, Karthik, Gazula, Harshvardhan, Mount, Christopher, Kozanno, Liana N., Marshall, Michael S., Connors, Theresa R., Frosch, Matthew P., Montine, Mark, Oakley, Derek H., Mac Donald, Christine L., Keene, C. Dirk, Hyman, Bradley T., Iglesias, Juan Eugenio

arXiv.org Artificial Intelligence

Advances in image registration and machine learning have recently enabled volumetric analysis of postmortem brain tissue from conventional photographs of coronal slabs, which are routinely collected in brain banks and neuropathology laboratories worldwide. One caveat of this methodology is the requirement of segmentation of the tissue from photographs, which currently requires costly manual intervention. In this article, we present a deep learning model to automate this process. The automatic segmentation tool relies on a U-Net architecture that was trained with a combination of 1,414 manually segmented images of both fixed and fresh tissue, from specimens with varying diagnoses, photographed at two different sites. Automated model predictions on a subset of photographs not seen in training were analyzed to estimate performance compared to manual labels, including both inter- and intra-rater variability. Our model achieved a median Dice score over 0.98, mean surface distance under 0.4 mm, and 95% Hausdorff distance under 1.60 mm, which approaches inter-/intra-rater levels. Our tool is publicly available at surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools.
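The Dice score reported above is a standard overlap measure between a predicted mask and a manual label. As a reference, here is a minimal sketch (not the paper's evaluation code) that computes it from foreground pixel coordinates:

```python
def dice_score(pred_pixels, truth_pixels):
    """Dice coefficient between two binary masks, each given as an
    iterable of foreground pixel coordinates:
    Dice = 2|P ∩ T| / (|P| + |T|), ranging from 0 (disjoint) to 1 (identical)."""
    pred, truth = set(pred_pixels), set(truth_pixels)
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```

A median Dice above 0.98 therefore means the automated masks overlap the manual ones almost completely, on par with how closely two human raters agree with each other.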


Building a Silver-Standard Dataset from NICE Guidelines for Clinical LLMs

Ding, Qing, Zhang, Eric Hua Qing, Jozsa, Felix, Ive, Julia

arXiv.org Artificial Intelligence

Large language models (LLMs) are increasingly used in healthcare, yet standardised benchmarks for evaluating guideline-based clinical reasoning are missing. This study introduces a validated dataset derived from publicly available guidelines across multiple diagnoses. The dataset was created with the help of GPT and contains realistic patient scenarios, as well as clinical questions. We benchmark a range of recent popular LLMs to showcase the validity of our dataset. The framework supports systematic evaluation of LLMs' clinical utility and guideline adherence.